

On Quadratic Convergence of DC Proximal Newton Algorithm in Nonconvex Sparse Learning

Neural Information Processing Systems

We propose a DC proximal Newton algorithm for solving nonconvex regularized sparse learning problems in high dimensions. The proposed algorithm integrates the proximal Newton algorithm with multi-stage convex relaxation based on difference-of-convex (DC) programming, and enjoys both strong computational and statistical guarantees. Specifically, by leveraging a sophisticated characterization of sparse modeling structures (i.e., local restricted strong convexity and Hessian smoothness), we prove that within each stage of convex relaxation, the algorithm achieves (local) quadratic convergence, and eventually obtains a sparse approximate local optimum with optimal statistical properties after only a few convex relaxations. Numerical experiments are provided to support our theory.
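The sketch below illustrates the multi-stage structure the abstract describes, under simplifying assumptions: a least-squares loss and the MCP regularizer. Each stage linearizes the concave part of the penalty (the DC step), leaving a weighted-l1 convex subproblem; for least squares, an exact proximal Newton step amounts to solving that subproblem, so plain coordinate descent stands in for it here. All function names (`dc_prox_newton`, `weighted_lasso_cd`, etc.) are illustrative, not the paper's implementation.

```python
import numpy as np

def mcp_l1_weight(theta, lam, gamma=3.0):
    # DC step: linearize the concave part of the MCP penalty at the
    # current iterate, giving an effective per-coordinate l1 weight
    # lam * max(0, 1 - |theta_j| / (gamma * lam)).
    return lam * np.maximum(0.0, 1.0 - np.abs(theta) / (gamma * lam))

def soft_threshold(z, w):
    return np.sign(z) * np.maximum(np.abs(z) - w, 0.0)

def weighted_lasso_cd(X, y, theta, w, iters=50):
    # Inner solver for one convex relaxation stage:
    #   min_theta  0.5/n * ||y - X theta||^2 + sum_j w_j |theta_j|.
    # For least squares, a proximal Newton step with the exact Hessian
    # reduces to solving this subproblem exactly.
    n = X.shape[0]
    col_sq = np.maximum((X ** 2).sum(axis=0) / n, 1e-12)
    r = y - X @ theta
    for _ in range(iters):
        for j in range(X.shape[1]):
            r += X[:, j] * theta[j]                 # drop coordinate j
            z = X[:, j] @ r / n                     # partial correlation
            theta[j] = soft_threshold(z, w[j]) / col_sq[j]
            r -= X[:, j] * theta[j]                 # restore residual
    return theta

def dc_prox_newton(X, y, lam, stages=5):
    # Outer loop: a few convex relaxations, each re-weighting the l1
    # term around the previous stage's solution (stage 1 is the plain
    # lasso, since theta starts at zero).
    theta = np.zeros(X.shape[1])
    for _ in range(stages):
        w = mcp_l1_weight(theta, lam)
        theta = weighted_lasso_cd(X, y, theta, w)
    return theta
```

Starting from zero makes the first stage an ordinary lasso; later stages shrink large coordinates less, which is what drives the improved statistical properties after only a few relaxations.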







A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Neural Information Processing Systems

Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework.
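As a concrete illustration of the kind of convex relaxation such verifiers use, the sketch below computes the standard "triangle" relaxation of a single ReLU from pre-activation bounds, plus naive interval propagation through an affine layer. The function names and the toy usage are hypothetical; this is not the paper's framework, only the building block it generalizes.

```python
import numpy as np

def interval_affine(l, u, W, b):
    # Propagate elementwise bounds l <= z <= u through an affine
    # layer z -> W z + b using naive interval arithmetic.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def relu_triangle_upper(l, u):
    # Upper envelope of the "triangle" convex relaxation of relu(z)
    # on [l, u]: the chord from (l, relu(l)) to (u, relu(u)), i.e.
    #   y <= a*z + c,  a = u/(u-l),  c = -l*u/(u-l)   when l < 0 < u.
    # A stable neuron (u <= 0 or l >= 0) is exactly linear.
    denom = np.maximum(u - l, 1e-12)  # guard against u == l
    a = np.where(u <= 0.0, 0.0, np.where(l >= 0.0, 1.0, u / denom))
    c = np.where((l < 0.0) & (u > 0.0), -l * u / denom, 0.0)
    return a, c  # lower bounds of the relaxation are y >= 0 and y >= z

# Toy usage: relax one hidden layer under an l_inf perturbation of x.
x, eps = np.array([0.5, -0.2]), 0.1
W, b = np.array([[1.0, -2.0], [0.5, 1.0]]), np.zeros(2)
l, u = interval_affine(x - eps, x + eps, W, b)
a, c = relu_triangle_upper(l, u)
```

An LP-relaxed verifier collects these linear constraints across all layers and optimizes a linear objective (e.g., a margin between logits) over the resulting polytope.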